We propose a novel framework for cross-lingual content flagging with limited target-language data, which significantly outperforms prior work in predictive performance. The framework is based on a nearest-neighbor architecture. It is a modern instantiation of the vanilla k-nearest-neighbor model, as we use transformer representations in all of its components. Our framework can adapt to new source-language instances without retraining from scratch. Unlike prior work on neighborhood-based approaches, we encode the neighbor information based on query-neighbor interactions. We propose two encoding schemes and show their effectiveness using both qualitative and quantitative analysis. Our evaluation on eight languages from two different datasets for abusive language detection shows sizable improvements over strong baselines of up to 9.5 F1 points absolute (for Italian). On average, we achieve an absolute F1 improvement of 3.6 for the three languages in the Jigsaw Multilingual dataset and of 2.14 on the WUL dataset.
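To make the vanilla k-nearest-neighbor component concrete, here is a minimal, self-contained sketch of flagging by majority vote over nearest neighbors in an embedding space. It assumes precomputed sentence embeddings (in the paper these come from transformer encoders); the function name, the two-dimensional toy embeddings, and cosine similarity as the distance measure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_flag(query_emb, neighbor_embs, neighbor_labels, k=3):
    """Flag a query by majority vote over its k nearest labeled neighbors.

    Similarity is cosine over embedding vectors; in the paper's setting
    these would be multilingual transformer representations.
    """
    q = query_emb / np.linalg.norm(query_emb)
    n = neighbor_embs / np.linalg.norm(neighbor_embs, axis=1, keepdims=True)
    sims = n @ q                      # cosine similarity to each neighbor
    top = np.argsort(-sims)[:k]       # indices of the k most similar neighbors
    return int(neighbor_labels[top].mean() >= 0.5)

# Toy labeled pool: three "abusive" (1) and three "benign" (0) embeddings.
embs = np.array([[1.0, 0.1], [0.9, -0.1], [1.0, 0.0],
                 [-1.0, 0.1], [-0.9, 0.0], [-1.0, -0.1]])
labels = np.array([1, 1, 1, 0, 0, 0])

print(knn_flag(np.array([0.8, 0.05]), embs, labels))   # near the abusive cluster
```

The paper's contribution goes beyond this baseline: rather than a plain vote, it encodes query-neighbor interactions; the sketch only shows the retrieval skeleton that the interaction encoders build on.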
Camera images are ubiquitous in machine learning research. They also play a central role in the delivery of important services spanning medicine and environmental surveying. However, the application of machine learning models in these domains has been limited because of robustness concerns. A primary failure mode is a performance drop caused by differences between the training and deployment data. While there are methods to prospectively validate the robustness of machine learning models to such dataset drifts, existing approaches do not account for explicit models of the primary object of interest: the data. This makes it difficult to create physically faithful drift test cases or to provide specifications of data models that should be avoided when deploying a machine learning model. In this study, we demonstrate how these shortcomings can be overcome by pairing machine learning robustness validation with physical optics. We examine the role raw sensor data and differentiable data models can play in controlling performance risks related to image dataset drift. The findings are distilled into three applications. First, drift synthesis enables the controlled generation of physically faithful drift test cases. The experiments presented here show that the average decrease in model performance is four to ten times less severe than under post-hoc augmentation testing. Second, the gradient connection between task and data models allows for drift forensics that can be used to specify performance-sensitive data models which should be avoided during deployment of a machine learning model. Third, drift adjustment opens up the possibility of processing adjustments in the face of drift. This can speed up and stabilize classifier training, with gains of up to 20% in validation accuracy. A guide to accessing the open code and datasets is available at https://github.com/aiaudit-org/raw2logit.
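The idea of an explicit, parametric data model can be illustrated with a toy raw-to-image processing function: the same raw capture developed under different processing parameters yields physically motivated drift test cases. This is a minimal sketch under assumed parameters (gain, gamma, black level); the real pipeline in the repository is differentiable and considerably richer, and none of the names below are taken from it.

```python
import numpy as np

def develop(raw, gain=1.0, gamma=2.2, black_level=0.0):
    """Toy parametric data model mapping raw sensor values in [0, 1] to an image.

    gain, gamma, and black_level stand in for processing parameters;
    varying them synthesizes controlled drift between 'deployments'.
    """
    linear = np.clip((raw - black_level) * gain, 0.0, 1.0)  # linear stage
    return linear ** (1.0 / gamma)                           # tone curve

# Same raw capture, two processing settings -> a drift test pair.
raw = np.linspace(0.0, 1.0, 5)
nominal = develop(raw)                       # reference processing
drifted = develop(raw, gain=1.3, gamma=1.8)  # perturbed processing
print(np.round(nominal, 3))
print(np.round(drifted, 3))
```

Because the data model is an explicit function, perturbations stay physically interpretable, and (in a differentiable implementation) gradients can flow from the task model back into the processing parameters, which is what enables the drift-forensics application described above.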